Jay Glazer hints 2026 NFL Draft could be rocked by bombshell move

FOX News

Fernando Mendoza is expected to go No. 1 to the Raiders, but Glazer says the night has more in store. Based on what Jay Glazer is hearing, the 2026 NFL Draft is about to heat up in a rather big way. What's the bombshell that's going to hit the draft tonight at 8 p.m. ET from Pittsburgh? Are we about to get a major trade into a top spot? Will a team trade with the Arizona Cardinals to get into the No. 3 spot, as a loud rumor this week has suggested? "I know something that's going on, I just can't say it yet because I was told you can't say it until kind of getting on the clock there," Glazer said Wednesday during an episode of Wake Up Barstool on Fox Sports.


ChatGPT predicted the first round of the NFL Draft and here's what it said

FOX News

Ultimate human vs. machine showdown as OutKick's Dan Z. takes on ChatGPT in a mock draft battle. I'm not sure why I do these things to myself, but I decided to go head-to-head with ChatGPT in a mock draft competition. I recently released my final mock draft, and then I asked ChatGPT to predict the entire first round. Below, you will see where we are the same and where we are different.


Cover meets Robbins while Betting on Bounded Data: $\ln n$ Regret and Almost Sure $\ln\ln n$ Regret

Agrawal, Shubhada, Ramdas, Aaditya

arXiv.org Machine Learning

Consider betting against a sequence of data in $[0,1]$, where one is allowed to make any bet that is fair if the data have a conditional mean $m_0 \in (0,1)$. Cover's universal portfolio algorithm delivers a worst-case regret of $O(\ln n)$ compared to the best constant bet in hindsight, and this bound is unimprovable against adversarially generated data. In this work, we present a novel mixture betting strategy that combines insights from Robbins and Cover, and exhibits a different behavior: it eventually produces a regret of $O(\ln \ln n)$ on \emph{almost} all paths (a measure-one set of paths if each conditional mean equals $m_0$ and intrinsic variance increases to $\infty$), but has an $O(\ln n)$ regret on the complement (a measure-zero set of paths). Our paper appears to be the first to point out the value in hedging two very different strategies to achieve a best-of-both-worlds adaptivity to stochastic data and protection against adversarial data. We contrast our results to those in~\cite{agrawal2025regret} for a sub-Gaussian mixture on unbounded data: their worst-case regret has to be unbounded, but a similar hedging delivers both an optimal betting growth-rate and an almost sure $\ln\ln n$ regret on stochastic data. Finally, our strategy witnesses a sharp game-theoretic upper law of the iterated logarithm, analogous to~\cite{shafer2005probability}.
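A small numerical sketch of the kind of regret comparison described above, using a discretized uniform mixture over constant bets (illustrative only; this is not the paper's Robbins-Cover strategy, and the data, grid, and prior are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m0 = 0.5
n = 2000
x = rng.uniform(0.0, 1.0, size=n)             # bounded data with conditional mean m0

# Fair bets: wealth multiplies by 1 + lam*(x_t - m0); lam must stay inside
# (-1/(1-m0), 1/m0) = (-2, 2) so that wealth remains positive.
lams = np.linspace(-1.9, 1.9, 401)
log_gains = np.log1p(np.outer(lams, x - m0))  # shape (n_lams, n)
log_wealth = log_gains.sum(axis=1)            # log-wealth of each constant bet

best = log_wealth.max()                       # best constant bet in hindsight
# Cover-style mixture: wealth of averaging over the (uniform) prior on lam,
# computed in log space for numerical stability.
mix = np.log(np.mean(np.exp(log_wealth - best))) + best
regret = best - mix                           # grows like O(ln n) in the worst case
```

On this i.i.d. stream the mixture's regret against the best constant bet stays a few units, consistent with the $O(\ln n)$ worst-case scale.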


Non-Stationarity in the Embedding Space of Time Series Foundation Models

Choi, Jinmyeong, Shook, Brad, Dubrawski, Artur

arXiv.org Machine Learning

Time series foundation models (TSFMs) are widely used as generic feature extractors, yet the notion of non-stationarity in their embedding spaces remains poorly understood. Recent work often conflates non-stationarity with distribution shift, blurring distinctions fundamental to classical time-series analysis and long-standing methodologies such as statistical process control (SPC). In SPC, non-stationarity signals a process leaving a stable regime - via shifts in mean, variance, or emerging trends - and detecting such departures is central to quality monitoring and change-point analysis. Motivated by this diagnostic tradition, we study how different forms of distributional non-stationarity - mean shifts, variance changes, and linear trends - become linearly accessible in TSFM embedding spaces under controlled conditions. We further examine temporal non-stationarity arising from persistence, which reflects violations of weak stationarity due to long-memory or near-unit-root behavior rather than explicit distributional shifts. By sweeping shift strength and probing multiple TSFMs, we find that embedding-space detectability of non-stationarity degrades smoothly and that different models exhibit distinct, model-specific failure modes.
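A toy version of the linear-probe experiment, with a random tanh projection standing in for a TSFM embedding (an assumption made here for self-containment; the paper probes real foundation models):

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 64, 32                               # window length, embedding dimension

def embed(window, W):
    # Stand-in for a TSFM embedding: a fixed random projection plus tanh.
    return np.tanh(W @ window)

W = rng.normal(size=(d, T)) / np.sqrt(T)

def make_window(shift):
    x = rng.normal(size=T)
    if shift:
        x[T // 2:] += 2.0                   # mean shift halfway through the window
    return x

n = 400
X = np.stack([embed(make_window(i % 2 == 1), W) for i in range(n)])
y = np.array([i % 2 for i in range(n)], dtype=float)

# Linear probe: least-squares fit of the labels, thresholded at 0.5.
D = np.c_[X, np.ones(n)]
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
acc = ((D @ beta > 0.5).astype(float) == y).mean()
```

High probe accuracy here corresponds to the mean shift being "linearly accessible" in the embedding space; sweeping the shift size (2.0 above) down toward zero reproduces the smooth degradation the abstract describes.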


PRIM-cipal components analysis

Liu, Tianhao, Díaz-Pachón, Daniel Andrés, Rao, J. Sunil

arXiv.org Machine Learning

Even supervised learning is subject to the famous No-Free-Lunch Theorems (NFLTs) [1]-[3], which say that, in combinatorial optimization, there is no universal algorithm that works better than its competitors for every objective function [4]-[6]. Indeed, David Wolpert has recently proven that, on average, cross-validation performs as well as anti-cross-validation (choosing among a set of candidate algorithms based on which has the worst out-of-sample behavior) for supervised learning. Still, he acknowledges that "it is hard to imagine any scientist who would not prefer to use [cross-validation] to using anti-cross-validation" [7]. On the other hand, unsupervised learning has seldom been studied from the perspective of the NFLTs. This may be because the adjective "unsupervised" suggests that no human input is needed, which is misleading, as many unsupervised tasks are combinatorial optimization problems that depend on the choice of the objective function. For instance, it is well known that, among the eigenvectors of the covariance matrix, Principal Components Analysis selects those with the largest variances [8]. However, mode-hunting techniques that rely on spectral manipulation aim at the opposite objective: selecting the eigenvectors of the covariance matrix with the smallest variances [9], [10]. Therefore, unlike in supervised learning, where it is difficult to identify reasons to optimize with respect to anti-cross-validation, in unsupervised learning there are strong reasons to reduce dimensionality for variance minimization. D. A. Díaz-Pachón and T. Liu are with the Division of Biostatistics, University of Miami, Miami, FL 33136 USA (e-mail: ddiaz3@miami.edu,
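The opposing objective functions can be seen in a few lines of NumPy on synthetic data: PCA keeps the largest-variance eigenvectors of the covariance matrix, while the mode-hunting objective keeps the smallest.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic data with one high-variance, one moderate, and one low-variance axis.
A = rng.normal(size=(500, 3)) * np.array([5.0, 1.0, 0.2])
C = np.cov(A, rowvar=False)
evals, evecs = np.linalg.eigh(C)            # eigh returns eigenvalues in ascending order

pca_dir = evecs[:, -1]                      # PCA: keep the largest-variance direction
mode_dir = evecs[:, 0]                      # mode hunting: keep the smallest-variance direction
```

Both selections come from the same eigendecomposition; only the objective function differs, which is exactly the NFLT-style point the passage makes about unsupervised learning.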


Phase transitions in Doi-Onsager, Noisy Transformer, and other multimodal models

Mun, Kyunghoo, Rosenzweig, Matthew

arXiv.org Machine Learning

We study phase transitions for repulsive-attractive mean-field free energies on the circle. For a $\frac{1}{n+1}$-periodic interaction whose Fourier coefficients satisfy a certain decay condition, we prove that the critical coupling strength $K_c$ coincides with the linear stability threshold $K_\#$ of the uniform distribution and that the phase transition is continuous, in the sense that the uniform distribution is the unique global minimizer at criticality. The proof is based on a sharp coercivity estimate for the free energy obtained from the constrained Lebedev--Milin inequality. We apply this result to three motivating models for which the exact value of the phase transition and its (dis)continuity in terms of the model parameters were not fully known. For the two-dimensional Doi--Onsager model $W(\theta)=-|\sin(2\pi\theta)|$, we prove that the phase transition is continuous at $K_c=K_\#=3\pi/4$. For the noisy transformer model $W_\beta(\theta)=(e^{\beta\cos(2\pi\theta)}-1)/\beta$, we identify the sharp threshold $\beta_*$ such that $K_c(\beta) = K_\#(\beta)$ and the phase transition is continuous for $\beta\leq \beta_*$, while $K_c(\beta) < K_\#(\beta)$ for $\beta > \beta_*$. We also obtain the corresponding sharp dichotomy for the noisy Hegselmann--Krause model $W_{R}(\theta) = (R-2\pi|\theta|)_{+}^2$.
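The abstract does not display the free-energy functional itself. A standard form of a mean-field free energy on the circle with interaction $W$ and coupling strength $K$ (stated here as an assumption about the setting, with $\rho$ a probability density on $[0,1)$) is

```latex
\mathcal{F}_K[\rho] \;=\; \int_{0}^{1} \rho(\theta)\,\ln\rho(\theta)\,d\theta
\;+\; \frac{K}{2}\int_{0}^{1}\!\!\int_{0}^{1} W(\theta-\theta')\,\rho(\theta)\,\rho(\theta')\,d\theta\,d\theta'.
```

In this picture, $K_c$ is the smallest coupling at which a non-uniform global minimizer appears, while $K_\#$ is the coupling at which the uniform density loses linear stability; the theorem asserts these coincide under the stated decay condition.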


Generalization Guarantees on Data-Driven Tuning of Gradient Descent with Langevin Updates

Goyal, Saumya, Rongali, Rohith, Ray, Ritabrata, Póczos, Barnabás

arXiv.org Machine Learning

We study learning to learn for regression problems through the lens of hyperparameter tuning. We propose the Langevin Gradient Descent algorithm (LGD), which approximates the mean of the posterior distribution defined by the loss function and regularizer of a convex regression task. We prove the existence of an optimal hyperparameter configuration for which the LGD algorithm achieves the Bayes-optimal solution for squared loss. Subsequently, we study generalization guarantees on meta-learning optimal hyperparameters for the LGD algorithm from a given set of tasks in the data-driven setting. For $d$ parameters and hyperparameter dimension $h$, we show a pseudo-dimension bound of $O(dh)$, up to logarithmic terms, under mild assumptions on LGD. This matches the dimensional dependence of the bounds obtained in prior work for the elastic net, which allows for only $h=2$ hyperparameters, and extends their bounds to regression on convex losses. Finally, we show empirical evidence of the success of LGD and the meta-learning procedure for few-shot learning on linear regression using a few synthetically created datasets.
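A minimal sketch of a Langevin update on a ridge-regularized regression task, where averaged iterates approximate the posterior mean; the step size, inverse temperature, and regularization strength below are illustrative guesses, not the paper's tuned hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 5
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + 0.5 * rng.normal(size=n)

lam, eta, beta = 1.0, 1e-3, 50.0            # illustrative hyperparameters

def grad(theta):
    # Gradient of (1/n)*||y - X theta||^2 + lam*||theta||^2.
    return (2.0 / n) * X.T @ (X @ theta - y) + 2.0 * lam * theta

theta = np.zeros(d)
avg, burn, steps = np.zeros(d), 2000, 10000
for t in range(steps):
    noise = rng.normal(size=d) * np.sqrt(2.0 * eta / beta)
    theta = theta - eta * grad(theta) + noise  # Langevin update
    if t >= burn:
        avg += theta
avg /= steps - burn                          # iterate average ~ posterior mean

# For this quadratic objective the posterior mean equals the ridge minimizer.
ridge = np.linalg.solve((2 / n) * X.T @ X + 2 * lam * np.eye(d), (2 / n) * X.T @ y)
```

Because the objective is a convex quadratic, the Gibbs posterior is Gaussian and its mean coincides with the ridge solution, so the averaged Langevin iterates land close to `ridge`.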


DDO-RM for LLM Preference Optimization: A Minimal Held-Out Benchmark against DPO

Zhang, Tiantian, Zuo, Jierui, Wang, Wenping

arXiv.org Machine Learning

This paper reorganizes the current manuscript around the DPO versus DDO-RM preference-optimization project and focuses on two parts: the algorithmic view and the preliminary held-out benchmark. The benchmark asks a narrow question: even in a minimal pairwise chosen-versus-rejected setting, can a reward-guided decision-distribution update outperform a direct pairwise objective? We compare Direct Preference Optimization (DPO) against DDO-RM on EleutherAI/pythia-410m using HuggingFaceH4/ultrafeedback\_binarized, evaluate on the held-out test\_prefs split, and report results for seeds 42, 13, and 3407. Algorithmically, DDO-RM treats each prompt as a finite decision problem over candidate responses. Instead of optimizing only a binary chosen-rejected relation, it forms a policy distribution over candidates, centers reward-model scores under that distribution, and distills a reward-guided target distribution back into the policy. In the current public benchmark, DDO-RM improves mean pair accuracy from 0.5238 to 0.5602, AUC from 0.5315 to 0.5382, and mean margin from 0.1377 to 0.5353 relative to DPO. These are encouraging but still preliminary results: the study covers one model family, one dataset, one held-out evaluation split, and three seeds.
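The centering-and-distillation step described above can be sketched as follows; the update rule, step sizes, and reward values are illustrative guesses consistent with the abstract's description, not the paper's exact algorithm:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def ddo_rm_step(logits, rewards, alpha=1.0, lr=0.5):
    """One reward-guided decision-distribution update (illustrative only)."""
    pi = softmax(logits)
    centered = rewards - pi @ rewards         # center reward scores under the policy
    target = pi * np.exp(alpha * centered)    # reward-guided target distribution
    target = target / target.sum()
    # Distill the target into the policy: gradient step on KL(target || softmax(logits)).
    return logits + lr * (target - pi)

logits = np.zeros(4)                          # uniform policy over 4 candidate responses
rewards = np.array([0.1, 0.9, 0.2, 0.3])      # made-up reward-model scores
for _ in range(50):
    logits = ddo_rm_step(logits, rewards)
pi = softmax(logits)
```

The update uses the whole decision distribution rather than a single chosen-rejected pair, which is the contrast with DPO that the benchmark is designed to probe.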


Robust Regression with Adaptive Contamination in Response: Optimal Rates and Computational Barriers

Diakonikolas, Ilias, Gao, Chao, Kane, Daniel M., Pensia, Ankit, Xie, Dong

arXiv.org Machine Learning

We study robust regression under a contamination model in which covariates are clean while the responses may be corrupted in an adaptive manner. Unlike the classical Huber's contamination model, where both covariates and responses may be contaminated and consistent estimation is impossible when the contamination proportion is a non-vanishing constant, it turns out that the clean-covariate setting admits strictly improved statistical guarantees. Specifically, we show that the additional information in the clean covariates can be carefully exploited to construct an estimator that achieves a better estimation rate than that attainable under Huber contamination. In contrast to the Huber model, this improved rate implies consistency even when the contamination is a constant. A matching minimax lower bound is established using Fano's inequality together with the construction of contamination processes that match $m> 2$ distributions simultaneously, extending the previous two-point lower bound argument in Huber's setting. Despite the improvement over the Huber model from an information-theoretic perspective, we provide formal evidence -- in the form of Statistical Query and Low-Degree Polynomial lower bounds -- that the problem exhibits strong information-computation gaps. Our results strongly suggest that the information-theoretic improvements cannot be achieved by polynomial-time algorithms, revealing a fundamental gap between information-theoretic and computational limits in robust regression with clean covariates.
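As a toy illustration of the contamination model (clean covariates, a constant fraction of responses corrupted), the sketch below compares ordinary least squares with a simple robust estimator; Theil-Sen is chosen here purely for illustration and is not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(4)
n, eps, slope = 300, 0.2, 2.0
x = rng.normal(size=n)                        # clean covariates
y = slope * x + 0.1 * rng.normal(size=n)
bad = rng.choice(n, size=int(eps * n), replace=False)
y[bad] = 10.0                                 # corrupt a constant fraction of responses

ols = (x @ y) / (x @ x)                       # least squares (no intercept), biased by corruption

# Theil-Sen: median of pairwise slopes, robust to this contamination level.
i, j = np.triu_indices(n, k=1)
theil_sen = np.median((y[j] - y[i]) / (x[j] - x[i]))
```

Even with 20% of responses corrupted, the robust estimate stays near the true slope, illustrating the consistency-under-constant-contamination phenomenon the abstract establishes (with far stronger, minimax-optimal estimators).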


The Generalised Kernel Covariance Measure

Bergen, Luca, Sejdinovic, Dino, Didelez, Vanessa

arXiv.org Machine Learning

We consider the problem of conditional independence (CI) testing and adopt a kernel-based approach. Kernel-based CI tests embed variables in reproducing kernel Hilbert spaces, regress their embeddings on the conditioning variables, and test the resulting residuals for marginal independence. This approach yields tests that are sensitive to a broad range of conditional dependencies. Existing methods, however, rely heavily on kernel ridge regression, which is computationally expensive when properly tuned and yields poorly calibrated tests when left untuned, which limits their practical usefulness. We propose the Generalised Kernel Covariance Measure (GKCM), a regression-model-agnostic kernel-based CI test that accommodates a broad class of regression estimators. Building on the Generalised Hilbertian Covariance Measure framework (Lundborg et al., 2022), we characterise conditions under which GKCM satisfies uniform asymptotic level guarantees. In simulations, GKCM paired with tree-based regression models frequently outperforms state-of-the-art CI tests across a diverse range of data-generating processes, achieving better type I error control and competitive or superior power.
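A minimal sketch of the residual-based recipe: random Fourier features stand in for the RKHS embeddings, and the regression step uses plain least squares on features of the conditioning variable. None of this is the paper's exact construction, and no calibration is attempted; the sketch only shows the statistic separating a conditionally independent pair from a dependent one.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
Z = rng.normal(size=n)
X = Z + 0.3 * rng.normal(size=n)
Y_ci = Z ** 2 + 0.3 * rng.normal(size=n)       # X independent of Y_ci given Z
Y_dep = Z ** 2 + X + 0.3 * rng.normal(size=n)  # X influences Y_dep beyond Z

def rff(v, n_feat, seed):
    # Random Fourier features: a finite-dimensional Gaussian-kernel embedding.
    r = np.random.default_rng(seed)
    w = r.normal(size=n_feat)
    b = r.uniform(0.0, 2.0 * np.pi, size=n_feat)
    return np.sqrt(2.0 / n_feat) * np.cos(np.outer(v, w) + b)

def residualize(F):
    # Model-agnostic regression step; here, least squares on features of Z.
    P = np.c_[np.ones(n), rff(Z, 50, seed=0)]
    coef, *_ = np.linalg.lstsq(P, F, rcond=None)
    return F - P @ coef

def stat(u, v):
    Rx = residualize(rff(u, 20, seed=1))
    Ry = residualize(rff(v, 20, seed=2))
    return np.linalg.norm(Rx.T @ Ry / n)       # residual cross-covariance norm

t_ci, t_dep = stat(X, Y_ci), stat(X, Y_dep)
```

Swapping the least-squares step for a tree-based regressor is exactly the kind of substitution GKCM is built to license, since its level guarantees are stated for a broad class of regression estimators.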